Repulsive Deep Ensembles are Bayesian
Deep ensembles have recently gained popularity in the deep learning community
for their conceptual simplicity and efficiency. However, maintaining functional
diversity between ensemble members that are independently trained with gradient
descent is challenging. This can lead to pathologies when adding more ensemble
members, such as a saturation of the ensemble performance, which converges to
the performance of a single model. Moreover, this not only affects the quality of the ensemble's predictions but, even more so, its uncertainty estimates, and thus its performance on out-of-distribution data. We hypothesize
that this limitation can be overcome by discouraging different ensemble members
from collapsing to the same function. To this end, we introduce a kernelized
repulsive term in the update rule of deep ensembles. We show that this
simple modification not only enforces and maintains diversity among the members
but, even more importantly, transforms the maximum a posteriori inference into
proper Bayesian inference. Namely, we show that the training dynamics of our
proposed repulsive ensembles follow a Wasserstein gradient flow of the KL
divergence to the true posterior. We study repulsive terms in weight and
function space and empirically compare their performance to standard ensembles
and Bayesian baselines on synthetic and real-world prediction tasks.
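To make the update rule concrete, here is a minimal weight-space sketch of a kernelized repulsive gradient step, assuming an RBF kernel and a precomputed log-posterior gradient; the function names and the exact normalization are illustrative, not the authors' implementation.

```python
import torch

def rbf_kernel(W, bandwidth=1.0):
    # Pairwise RBF kernel between ensemble members (rows of W), computed
    # from squared distances so gradients are well defined on the diagonal.
    diffs = W.unsqueeze(1) - W.unsqueeze(0)      # (n, n, d)
    sq_dists = diffs.pow(2).sum(-1)              # (n, n)
    return torch.exp(-sq_dists / (2 * bandwidth ** 2))

def repulsive_step(W, grad_log_post, bandwidth=1.0, lr=1e-3):
    # One update for n ensemble members stacked as rows of W (n, d);
    # grad_log_post holds each member's gradient of the log-posterior.
    W = W.detach().requires_grad_(True)
    K = rbf_kernel(W, bandwidth)
    # sum_j grad_{w_i} k(w_i, w_j): pushes members away from one another.
    grad_K = torch.autograd.grad(K.sum(), W)[0]
    repulsion = grad_K / K.sum(dim=1, keepdim=True)
    return (W + lr * (grad_log_post - repulsion)).detach()
```

A function-space variant would apply the same kernel to the members' predictions on a batch of inputs rather than to their weight vectors.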
SOM-VAE: Interpretable Discrete Representation Learning on Time Series
High-dimensional time series are common in many domains. Since human
cognition is not optimized to work well in high-dimensional spaces, these areas
could benefit from interpretable low-dimensional representations. However, most
representation learning algorithms for time series data are difficult to
interpret. This is due to non-intuitive mappings from data features to salient
properties of the representation and non-smoothness over time. To address this
problem, we propose a new representation learning framework building on ideas
from interpretable discrete dimensionality reduction and deep generative
modeling. This framework allows us to learn discrete representations of time
series, which give rise to smooth and interpretable embeddings with superior
clustering performance. We introduce a new way to overcome the
non-differentiability in discrete representation learning and present a
gradient-based version of the traditional self-organizing map algorithm that is
more performant than the original. Furthermore, to allow for a probabilistic
interpretation of our method, we integrate a Markov model in the representation
space. This model uncovers the temporal transition structure, improves
clustering performance even further and provides additional explanatory
insights as well as a natural representation of uncertainty. We evaluate our
model in terms of clustering performance and interpretability on static
(Fashion-)MNIST data, a time series of linearly interpolated (Fashion-)MNIST
images, a chaotic Lorenz attractor system with two macro states, as well as on
a challenging real-world medical time series application on the eICU data set.
Our learned representations compare favorably with competitor methods and
facilitate downstream tasks on the real-world data.
Comment: Accepted for publication at the Seventh International Conference on Learning Representations (ICLR 2019).
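As a rough illustration of how the argmin's non-differentiability can be bypassed, the sketch below uses a straight-through gradient copy for the discrete SOM assignment; this is a common device and only an approximation of the paper's mechanism, with illustrative names throughout.

```python
import torch

def som_quantize(z_e, codebook):
    # z_e: (batch, d) encoder outputs; codebook: (n_nodes, d) embeddings
    # of the SOM grid nodes.
    dists = torch.cdist(z_e, codebook)   # (batch, n_nodes)
    idx = dists.argmin(dim=1)            # winning (discrete) node per input
    z_q = codebook[idx]                  # quantized representation
    # Straight-through trick: the forward pass uses z_q, while the backward
    # pass copies the gradient to z_e as if quantization were the identity.
    z_st = z_e + (z_q - z_e).detach()
    return z_st, idx
```

A SOM-style loss would additionally pull the grid neighbors of each winning node toward z_e, which is what makes the learned discrete structure smooth over the map.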
Scalable PAC-Bayesian Meta-Learning via the PAC-Optimal Hyper-Posterior: From Theory to Practice
Meta-Learning aims to speed up the learning process on new tasks by acquiring
useful inductive biases from datasets of related learning tasks. While, in
practice, the number of related tasks available is often small, most of the
existing approaches assume an abundance of tasks, making them unrealistic and
prone to overfitting. A central question in the meta-learning literature is how
to regularize to ensure generalization to unseen tasks. In this work, we
provide a theoretical analysis using PAC-Bayesian theory and present a
generalization bound for meta-learning, which was first derived by Rothfuss et
al. (2021). Crucially, the bound allows us to derive the closed form of the
optimal hyper-posterior, referred to as PACOH, which leads to the best
performance guarantees. We provide a theoretical analysis and an empirical case study of the conditions under which, and the extent to which, these meta-learning guarantees improve upon PAC-Bayesian per-task learning bounds. The
closed-form PACOH inspires a practical meta-learning approach that avoids the
reliance on bi-level optimization, giving rise to a stochastic optimization
problem that is amenable to standard variational methods that scale well. Our
experiments show that, when instantiating PACOH with Gaussian process and Bayesian neural network models, the resulting methods are more scalable and
yield state-of-the-art performance, both in terms of predictive accuracy and
the quality of uncertainty estimates.
Comment: 61 pages. arXiv admin note: text overlap with arXiv:2002.0555
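For orientation, the closed-form PACOH of Rothfuss et al. (2021) is a Gibbs-type hyper-posterior over priors; up to the temperature constants dictated by the bound, it reads (notation approximate):

$$ Q^*(P) \;\propto\; \mathcal{P}(P)\,\exp\!\Big(\tfrac{1}{\lambda}\sum_{i=1}^{n}\ln Z_\beta(S_i, P)\Big), \qquad Z_\beta(S_i, P) = \int P(h)\, e^{-\beta\,\hat{\mathcal{L}}(h, S_i)}\,\mathrm{d}h, $$

where $\mathcal{P}$ is the hyper-prior, $S_1,\dots,S_n$ are the task datasets, and $Z_\beta$ is the partition function of the per-task Gibbs posterior. Because $\ln Z_\beta$ acts as a generalized marginal log-likelihood, $Q^*$ can be approximated with standard variational or sampling methods, which is what removes the bi-level optimization.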
Hodge-Aware Contrastive Learning
Simplicial complexes prove effective in modeling data with multiway
dependencies, such as data defined along the edges of networks or within other
higher-order structures. Their spectrum can be decomposed into three interpretable subspaces via the Hodge decomposition, which has proven foundational in numerous applications. We leverage this decomposition to develop a contrastive
self-supervised learning approach for processing simplicial data and generating
embeddings that encapsulate specific spectral information. Specifically, we
encode the pertinent data invariances through simplicial neural networks and
devise augmentations that yield positive contrastive examples with suitable
spectral properties for downstream tasks. Additionally, we reweight the
significance of negative examples in the contrastive loss, considering the
similarity of their Hodge components to the anchor. By encouraging a stronger
separation among less similar instances, we obtain an embedding space that
reflects the spectral properties of the data. Numerical results on two standard edge flow classification tasks show superior performance, even when compared to supervised learning techniques. Our findings underscore the
importance of adopting a spectral perspective for contrastive learning with
higher-order data.
Comment: 4 pages, 2 figures.
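As a sketch of the reweighting idea (illustrative only, not the paper's exact loss): in an InfoNCE-style objective, each negative's contribution can be scaled by a weight derived upstream from how dissimilar its Hodge components are to the anchor's, so that less similar instances are pushed away harder.

```python
import torch
import torch.nn.functional as F

def reweighted_info_nce(anchor, positive, negatives, neg_weights, tau=0.1):
    # anchor, positive: (d,) embeddings; negatives: (k, d) embeddings.
    # neg_weights: (k,) positive weights, larger for negatives whose Hodge
    # components are less similar to the anchor (computed upstream).
    a = F.normalize(anchor, dim=0)
    pos = torch.dot(a, F.normalize(positive, dim=0)) / tau
    neg = F.normalize(negatives, dim=1) @ a / tau    # (k,) similarities
    # -log( e^{pos} / (e^{pos} + sum_k w_k e^{neg_k}) ), via logsumexp.
    logits = torch.cat([pos.unsqueeze(0), neg])
    log_w = torch.cat([neg_weights.new_zeros(1), neg_weights.log()])
    return -(pos - torch.logsumexp(logits + log_w, dim=0))
```

With all weights equal to one this reduces to the standard InfoNCE loss, so the reweighting can be read as a spectral prior on which negatives matter most.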